
AgentHub#92

Open
karpathy wants to merge 1 commit into master from agenthub

Conversation

@karpathy
Owner

@karpathy karpathy commented Mar 9, 2026

Call for help/discussion on autoresearch integration into AgentHub. I have an early version deployed on autoresearchhub.com. The new program.md I am using for my first agent is below.

After some iteration I might push to master of autoresearch. I just want to iterate on it first a bit more and think it through.

@dumko2001
Contributor

dumko2001 commented Mar 11, 2026

@karpathy One question that came to mind while reading the experiment loop: the system is elegantly optimized for incremental hill-climbing. Agents propose a change, it improves val_bpb, it gets pushed, and it becomes part of the lineage. Clean and effective.
But some of the biggest jumps in ML historically required temporarily worse performance before unlocking a better regime: architectural overhauls, different scaling tradeoffs, training dynamics that look broken before they stabilize. The path to a higher peak sometimes means stepping into the valley first. An agent that hard-discards anything below the current best has no way to take that step; it can only ever optimize the hill it's already standing on.
Humans handle this partly through intuition: we can look across the landscape and sense that a taller mountain exists somewhere, even before the numbers confirm it. Agents here don't have that mechanism yet.
Have you thought about how to encourage that kind of exploration? A few directions that come to mind:

Letting agents maintain side branches that are temporarily worse but flagged as speculative
A separate exploration budget for more radical architectural changes
Agents occasionally sampling outside the current frontier rather than always building on the current best lineage

Otherwise I wonder if the system converges strongly to local optima over time — not because agents are bad, but because the incentive structure only rewards the next incremental step.
Curious how you're thinking about the exploration/exploitation tradeoff. Happy to dig into this more if it's something you're actively considering.
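To make the second direction concrete, here is a minimal sketch of one possible acceptance rule (a Metropolis/simulated-annealing-style criterion, not anything from the actual AgentHub code): improvements are always kept, while temporarily-worse runs are kept with a probability that shrinks as the regression grows and as a decaying "exploration budget" (temperature) runs down. All numbers are hypothetical.

```python
import math
import random

def accept(candidate_bpb, best_bpb, temperature):
    """Decide whether to keep a candidate run (lower val_bpb is better).

    Always accept improvements; accept regressions with probability
    exp(-(regression)/temperature), so larger regressions and lower
    temperatures both make acceptance less likely.
    """
    if candidate_bpb <= best_bpb:
        return True
    return random.random() < math.exp(-(candidate_bpb - best_bpb) / temperature)

# Toy loop: a decaying exploration budget over hypothetical val_bpb values.
random.seed(0)
best = 3.50
temperature = 0.05
for step in range(1000):
    candidate = best + random.uniform(-0.02, 0.03)  # pretend agent proposal
    if accept(candidate, best, temperature):
        best = candidate        # lineage moves here, even if slightly worse
    temperature *= 0.995        # anneal: exploration narrows over time
```

Early on the lineage can wander into the valley; late in the run it behaves like pure hill-climbing again.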

@bigsnarfdude

bigsnarfdude commented Mar 11, 2026

repo was deleted and forks point to https://github.com/ygivenx/agenthub

@autonull

multiobjective model optimization: max accuracy, min parameters, min iteration time
https://github.com/autonull/bioplausible/blob/83bfde7bf4469a97d4fb890569d9233db3d0577d/bioplausible/hyperopt/optuna_bridge.py#L212
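For readers who don't want to dig through the linked Optuna bridge, the core of that kind of multiobjective search is Pareto dominance. A dependency-free sketch (my own illustration, not the linked code; accuracy is negated so all three objectives are minimized):

```python
def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    and strictly better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical runs: (-accuracy, parameter_count, iteration_time_s).
runs = [(-0.91, 12e6, 0.8), (-0.90, 3e6, 0.5), (-0.93, 40e6, 2.1), (-0.88, 5e6, 0.9)]
front = pareto_front(runs)
# The last run is dominated (less accurate, larger, and slower than the
# second), so the front keeps only the first three trade-off points.
```

No single "best" model comes out; the front is the set of defensible trade-offs between accuracy, size, and speed.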

@dhanaway

Autoresearch and AgentHub made me think a lot about evolution and a gene pool; it feels like there are shared characteristics between the two. In a gene pool there is no single 'main' branch: lots of tracks run at once in different directions, each trying to find some new, better path. Similar to the vision of AgentHub, there is sharing and swapping of genes between these tracks (analogous to the sharing and swapping of commits between branches). There is also no notion of a 'merge back into main': each track proceeds independently, some will fail and end, and others will continue on and become the de facto 'best track' for a time.

I wonder if there is some design that builds on this proven strategy? Or maybe this is just an indication that the current design is a good one.
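The gene-pool strategy described above is essentially a population-based (genetic-algorithm-style) search. A toy sketch, with a made-up bit-counting fitness standing in for real eval metrics, just to show the structure (independent tracks, gene swapping, extinction of weak tracks, no merge step):

```python
import random

random.seed(0)

def fitness(genome):
    # Toy objective: number of set bits (stand-in for a real val metric).
    return sum(genome)

def crossover(a, b):
    # Swap genes between two tracks, like commits shared across branches.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

# No single 'main' branch: a pool of independent tracks.
pool = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for generation in range(50):
    pool.sort(key=fitness, reverse=True)
    survivors = pool[:10]           # weak tracks simply end
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    pool = survivors + children     # constant pool size; no merge-to-main

best = max(pool, key=fitness)
```

The de facto 'best track' at any moment is just whichever genome currently tops the pool; nothing is ever merged anywhere.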
